18 research outputs found

    On Non-Bayesian Social Learning

    Full text link
    We study a model of information aggregation and social learning recently proposed by Jadbabaie, Sandroni, and Tahbaz-Salehi, in which individual agents try to learn the correct state of the world by iteratively updating their beliefs using their private observations and the beliefs of their neighbors. No individual agent's private signal may be informative enough to reveal the unknown state on its own. As a result, agents share their beliefs with others in their social neighborhood to learn from each other. At every time step each agent receives a private signal and computes a Bayesian posterior as an intermediate belief. The intermediate belief is then averaged with the beliefs of neighbors to form the individual's belief at the next time step. We find a set of minimal sufficient conditions under which the agents learn the unknown state and reach consensus on their beliefs, without any assumption on the private signal structure. The key enabler is a result showing that, under this update, agents eventually forecast the indefinite future correctly.
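    A minimal sketch of the update described above, using invented numbers and a two-state, three-agent example of my own rather than anything taken from the paper:

        import numpy as np

        # Each round: Bayesian update on the private signal, then a linear
        # average of that intermediate posterior with neighbors' current beliefs.
        signal_lik = np.array([[0.6, 0.4],   # P(signal | state); rows index the
                               [0.4, 0.6]])  # signal value, columns the state
        A = np.array([[0.5, 0.5, 0.0],       # row-stochastic weights: diagonal on
                      [0.3, 0.4, 0.3],       # the agent's own Bayesian posterior,
                      [0.0, 0.5, 0.5]])      # off-diagonal on neighbors' beliefs
        beliefs = np.full((3, 2), 0.5)       # uniform initial beliefs

        def step(beliefs, signals):
            posteriors = beliefs * signal_lik[signals]            # Bayes on own signal
            posteriors /= posteriors.sum(axis=1, keepdims=True)
            new = np.zeros_like(beliefs)
            for i in range(len(beliefs)):
                new[i] = A[i, i] * posteriors[i]                  # own intermediate belief
                for j in range(len(beliefs)):
                    if j != i:
                        new[i] += A[i, j] * beliefs[j]            # neighbors' time-t beliefs
            return new

        rng = np.random.default_rng(0)
        true_state = 1
        for t in range(200):
            signals = rng.choice(2, size=3, p=signal_lik[:, true_state])
            beliefs = step(beliefs, signals)
        print(beliefs)   # beliefs should concentrate near the true state

    With a strongly connected weight matrix, positive self-weights, and informative signals as in this toy setup, repeated rounds drive all three belief vectors toward the true state, which is the kind of consensus-and-learning outcome the abstract describes.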

    Essays on Learning in Social Networks

    Get PDF
    Over the past few years, online social networks have become nearly ubiquitous, reshaping our social interactions as at no other point in history. The preeminent aspect of this social media revolution is arguably an almost complete transformation of the ways in which we acquire, process, store, and use information. In view of the evolving nature of social networks and their increasing complexity, the development of formal models of social learning is imperative for a better understanding of the role of social networks in phenomena such as opinion formation, information aggregation, and coordination. This thesis takes a step in this direction by introducing and analyzing novel models of learning and coordination over networks. In particular, we provide answers to the following questions regarding a group of individuals who interact over a social network: 1) Do repeated communications between individuals with different subjective beliefs and pieces of information about a common true state lead them to eventually reach an agreement? 2) Do the individuals, through their social interactions, efficiently aggregate the information that is dispersed throughout the society? 3) And if so, how long does it take the individuals to aggregate the dispersed information and reach an agreement? This thesis provides answers to these questions under three different assumptions on the individuals' behavior in response to new information. We start by studying the behavior of a group of individuals who are fully rational and are only concerned with discovering the truth. We show that communications between rational individuals with access to complementary pieces of information eventually direct everyone to discover the truth. Yet, in spite of its axiomatic appeal, fully rational agent behavior may not be a realistic assumption when dealing with large societies and complex networks, owing to the extreme computational complexity of Bayesian inference. Motivated by this observation, we next explore the implications of bounded rationality by introducing biases in the way agents interpret the opinions of others, while maintaining the assumption that agents interpret their private observations rationally. Our analysis yields the result that, when faced with overwhelming evidence in favor of the truth, even biased agents will eventually learn to discover the truth. We further show that the rate of learning has a simple analytical characterization in terms of the relative entropy of agents' signal structures and their eigenvector centralities, and we use this characterization to perform comparative analysis. Finally, in the last chapter of the thesis, we introduce and analyze a novel model of opinion formation in which agents not only seek to discover the truth but also have the tendency to act in conformity with the rest of the population. Preference for conformity is relevant in scenarios ranging from participation in popular movements and following fads to trading in the stock market. We argue that myopic agents who value conformity do not necessarily fully aggregate the dispersed information; nonetheless, we prove that examples of the failure of information aggregation are rare in a precise sense.
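    The rate characterization mentioned above can be sketched as follows (notation assumed here rather than quoted from the thesis): if vᵢ denotes agent i's eigenvector centrality and ℓᵢ(·|θ) her signal distribution under state θ, the asymptotic rate at which a false state θ is ruled out is, roughly, the centrality-weighted sum of the relative entropies of the agents' signal structures:

        \[
          \lambda(\theta) \;=\; \sum_{i} v_i \,
          D_{\mathrm{KL}}\!\bigl(\ell_i(\cdot \mid \theta^{*}) \,\|\, \ell_i(\cdot \mid \theta)\bigr),
        \]

    so, under this reading, a society learns faster when its better-informed agents are also its more central ones.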

    Bayesian Quadratic Network Game Filters

    Full text link
    This paper considers a repeated network game where agents have quadratic utilities that depend on information externalities (an unknown underlying state) as well as payoff externalities (the actions of all other agents in the network). Agents play Bayesian Nash equilibrium strategies with respect to their beliefs on the state of the world and the actions of all other nodes in the network. These beliefs are refined over subsequent stages based on the observed actions of neighboring peers. The paper introduces the Quadratic Network Game (QNG) filter that agents can run locally to update their beliefs, select corresponding optimal actions, and eventually learn a sufficient statistic of the network's state. The QNG filter is demonstrated on a Cournot market competition game and on a coordination game that implements navigation of an autonomous team.
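    A hedged sketch of the structure such a filter can exploit (generic quadratic-game notation of my own, not taken from the paper): with payoffs of the form below and jointly Gaussian beliefs, each agent's equilibrium action is linear in its posterior means, so an observed neighbor action reveals a linear statistic of that neighbor's information and can be folded into the agent's belief with a Kalman-style update.

        \[
          u_i(a_i, a_{-i}, \theta)
            = -\,a_i^{2} + 2\,a_i\Bigl(\beta\,\theta + \delta \sum_{j \neq i} a_j\Bigr)
          \quad\Longrightarrow\quad
          a_i^{*} = \beta\,\mathbb{E}_i[\theta] + \delta \sum_{j \neq i} \mathbb{E}_i[a_j].
        \]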

    Essay on beliefs and the macroeconomy

    No full text
    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Economics, 2019. Cataloged from PDF version of thesis. Includes bibliographical references (pages 167-182).

    This thesis consists of three essays. The first essay explores a form of bounded rationality where agents learn about the economy with possibly misspecified models. I consider a recursive general-equilibrium framework that nests a large class of macroeconomic models. Misspecification is represented as a constraint on the set of beliefs agents can entertain. I introduce the solution concept of constrained-rational-expectations equilibrium (CREE), in which each agent selects the belief from her constrained set that is closest to the endogenous distribution of observables in the Kullback-Leibler divergence. If the set of permissible beliefs contains the rational-expectations equilibria (REE), then the REE are CREE; otherwise, they are not. I show that a CREE exists, that it arises naturally as the limit of adaptive and Bayesian learning, and that it incorporates a version of the Lucas critique. I then apply CREE to a particular novel form of bounded rationality where beliefs are constrained to factor models with a small number of endogenously chosen factors. Misspecification leads to amplification or dampening of shocks and to history dependence. The calibrated economy exhibits hump-shaped impulse responses and co-movements in consumption, output, hours, and investment that resemble business-cycle fluctuations.

    In the second essay, I ask the following question: what are the testable restrictions imposed on the dynamics of an agent's belief by the hypothesis of Bayesian rationality, which do not rely on the additional assumption that the agent has an objectively correct prior? I argue that there are essentially no such restrictions. I consider an agent who chooses a sequence of actions and an econometrician who observes the agent's actions and is interested in testing the hypothesis that the agent is Bayesian. I argue that, absent a priori knowledge on the part of the econometrician of the set of models considered by the agent, there are almost no observations that would lead the econometrician to conclude that the agent is not Bayesian. This result holds even if the set of actions is sufficiently rich that the agent's action fully reveals her belief about the payoff-relevant state, and even if the econometrician observes a large number of identical agents facing the same sequence of decision problems.

    In the third essay, I propose an equilibrium search and matching model with permanent worker heterogeneity, asymmetric information, and endogenous separations, and I study the dynamics of adverse selection in the labor market. The interaction between asymmetric information and endogenous separations leads to a cyclical adverse selection problem that has testable predictions both for aggregate variables and for individual workers' outcomes. First, a deterioration in the distribution of ability in the pool of the unemployed leads firms to raise their hiring standards, resulting in an outward shift of the Beveridge curve. Second, if the separation rate is log-supermodular (log-submodular) in productivity and ability, the pool of the unemployed becomes more (less) adversely selected in downturns. Third, firms rationally discriminate against the long-term unemployed by demanding more unequivocally positive signals of their ability before hiring them. Fourth, this scarring effect is more (less) severe for lower-ability workers and after deeper recessions if the separation rate is log-supermodular (log-submodular). I conclude by providing conditions on the fundamentals of the economy that lead to log-supermodular and log-submodular separation rates.

    by Pooya Molavi. Ph.D., Massachusetts Institute of Technology, Department of Economics.
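    A minimal sketch of the CREE belief-selection step from the first essay, under assumptions of my own (a one-parameter Gaussian family standing in for the constrained belief set; not the thesis's framework): picking the permissible belief closest in Kullback-Leibler divergence to the distribution of observables amounts, for a sample of observables, to maximum-likelihood fitting of the misspecified family.

        import numpy as np
        from scipy.stats import norm

        # Hypothetical example: the observables follow a bimodal mixture, but the
        # agent can only entertain unit-variance Gaussian beliefs. The KL-closest
        # permissible belief maximizes average log-likelihood (equivalently, it
        # minimizes KL divergence up to a constant independent of the belief).
        rng = np.random.default_rng(1)
        observables = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(2, 1, 5000)])

        mus = np.linspace(-4, 4, 801)             # candidate belief means
        avg_loglik = [norm.logpdf(observables, mu, 1).mean() for mu in mus]
        best_mu = mus[int(np.argmax(avg_loglik))]
        print(best_mu)                            # near 0: the projection splits the modes

    The point of the toy example is only that the selected belief can differ systematically from the true distribution whenever the constrained set cannot nest it, which is the source of the amplification, dampening, and history dependence the abstract mentions.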

    Simple Models and Biased Forecasts

    Full text link
    This paper proposes a framework in which agents are constrained to use simple models to forecast economic variables and characterizes the resulting biases. It considers agents who can only entertain state-space models with no more than d states, where d measures the intertemporal complexity of a model. Agents are boundedly rational in that they can only consider models that are too simple to nest the true process, yet they use the best model among those considered. I show that using simple models adds persistence to forward-looking decisions and increases the comovement among them. I then explain how this insight can bring the predictions of three workhorse macroeconomic models closer to the data. In the new-Keynesian model, forward guidance becomes less powerful. In the real business cycle model, consumption responds more sluggishly to productivity shocks. The Diamond-Mortensen-Pissarides model exhibits more internal propagation and more realistic comovement in response to productivity and separation shocks.
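    A hedged formalization of the constraint described above (generic state-space notation of my own, not quoted from the paper): the agent entertains only linear state-space models whose hidden state has dimension at most d and, among them, uses the one whose implied law for the observables is closest to the true process in Kullback-Leibler divergence.

        \[
          x_{t+1} = A x_t + w_t, \qquad y_t = C x_t + v_t, \qquad x_t \in \mathbb{R}^{d},
        \]
        \[
          \hat{m} \;\in\; \arg\min_{m \in \mathcal{M}_d}\;
          \lim_{T \to \infty} \tfrac{1}{T}\,
          D_{\mathrm{KL}}\!\bigl(P^{\mathrm{true}}_{y_{1:T}} \,\|\, P^{m}_{y_{1:T}}\bigr).
        \]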

    Polarization and Media Bias

    Full text link
    This paper presents a model of partisan media trying to persuade a sophisticated and heterogeneous audience. We base our analysis on a Bayesian persuasion framework where receivers have heterogeneous preferences and beliefs. We identify an intensive-versus-extensive-margin trade-off that drives the media's choice of slant: biasing the news garners more support from the audience who follows the media but reduces the size of the audience. The media's slant and target audience are qualitatively different in polarized and unimodal (or non-polarized) societies. When the media's agenda becomes more popular, the media become more biased. When society becomes more polarized, the media become less biased. Thus, polarization may have an unexpected consequence: it may compel partisan media to be less biased and more informative.
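    One illustrative way to write the intensive-versus-extensive-margin trade-off (my notation, not the paper's): the media choose a slant s to maximize the product of audience size and support per follower, where more slant raises the latter but shrinks the former; at an interior optimum the two elasticities exactly offset.

        \[
          \max_{s}\; A(s)\,\Delta(s), \qquad A'(s) < 0,\;\; \Delta'(s) > 0,
          \qquad \text{FOC: } \; \frac{A'(s)}{A(s)} + \frac{\Delta'(s)}{\Delta(s)} = 0.
        \]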

    Theory of Non-Bayesian Social Learning

    No full text

    Non-Bayesian Social Learning

    Get PDF
    We develop a dynamic model of opinion formation in social networks when the information required for learning a payoff-relevant parameter may not be at the disposal of any single agent. Individuals engage in communication with their neighbors in order to learn from their experiences. However, instead of incorporating the views of their neighbors in a fully Bayesian manner, agents use a simple updating rule which linearly combines their personal experience and the views of their neighbors (even though the neighbors' views may be quite inaccurate). This non-Bayesian learning rule is motivated by the formidable complexity required to fully implement Bayesian updating in networks. We show that, as long as individuals take their personal signals into account in a Bayesian way, repeated interactions lead them to successfully aggregate information and learn the true underlying state of the world. This result holds in spite of the apparent naïveté of agents' updating rule, the agents' need for information from sources of whose existence they may not be aware, the possibility that the most persuasive agents in the network are precisely those least informed and with the worst prior views, and the assumption that no agent can learn the underlying state based on her private observations alone.
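    The updating rule described here can be written compactly (notation assumed, following the description above rather than quoting the paper): agent i's belief at time t+1 mixes her own Bayesian update on the new private signal with her neighbors' time-t beliefs,

        \[
          \mu_{i,t+1} \;=\; a_{ii}\,\mathrm{BU}_i\!\bigl(\mu_{i,t};\, \omega_{i,t+1}\bigr)
            \;+\; \sum_{j \neq i} a_{ij}\, \mu_{j,t},
        \]

    where BU_i denotes the Bayesian posterior formed from belief μ_{i,t} and private signal ω_{i,t+1}, and the weights a_{ij} sum to one across j.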